[experimental] add simplified bsp and grpc exporter #3465
Conversation
```python
    This method must be extremely fast. It adds the span to the accumulator for later sending and pokes the timer
    if the number of spans waiting to be sent has reached the maximum batch size.
    """
    size = self._accumulator.push(span)
```
This size could be wrong around the time that the batch fills up if another thread flushes first. Is that intentional, like a best effort?
Good question: with this implementation, if you push a lot of spans through simultaneously from multiple threads, exports may contain batches somewhat larger or smaller than max_batch_size. I figure this is okay at this stage of development.
But we may end up wanting to do something like: lock, append the span, test for length, generate something like a flush request, then release the lock. This is a fair amount of complexity though, which I'm trying to avoid. I'm hoping we can arrive at a solution that's maximally simple (suggestions always welcome).
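A minimal sketch of that lock-then-test approach might look like the following. The `Accumulator` shape, the field names, and the `Event`-based flush request are all illustrative assumptions, not the PR's actual code:

```python
import threading
from typing import List


class Accumulator:
    """Sketch: push under a lock and signal when a batch fills up."""

    def __init__(self, max_batch_size: int):
        self._lock = threading.Lock()
        self._spans: List[object] = []
        self._max_batch_size = max_batch_size
        # A consumer (e.g. the timer thread) can wait on this to flush.
        self.flush_requested = threading.Event()

    def push(self, span) -> None:
        with self._lock:
            self._spans.append(span)
            # Test the length while still holding the lock, so the
            # "batch full" decision cannot race with another pusher.
            if len(self._spans) >= self._max_batch_size:
                self.flush_requested.set()
```

The cost is one lock acquisition per span on the hot path, which is the complexity/performance trade-off discussed above.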
I have updated the Accumulator to do batching, so we should have uniform batch sizes now.
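One hypothetical way an accumulator can produce uniform batch sizes is to form fixed-size batches internally as spans arrive, rather than slicing at export time. This sketch is an assumption about the approach, not the updated `Accumulator` from the PR:

```python
import threading
from typing import List


class BatchingAccumulator:
    """Sketch: accumulate spans into fixed-size batches so every export
    (except possibly the final one) sees exactly max_batch_size spans."""

    def __init__(self, max_batch_size: int):
        self._lock = threading.Lock()
        self._max_batch_size = max_batch_size
        self._current: List[object] = []              # partially filled batch
        self._full_batches: List[List[object]] = []   # completed, uniform batches

    def push(self, span) -> int:
        with self._lock:
            self._current.append(span)
            if len(self._current) == self._max_batch_size:
                self._full_batches.append(self._current)
                self._current = []
            # Report how many full batches are waiting, so the caller
            # can decide whether to poke the timer.
            return len(self._full_batches)

    def batch(self) -> List[object]:
        with self._lock:
            # Hand out one uniform batch at a time; fall back to the
            # partial batch only when none is full (e.g. at flush/shutdown).
            if self._full_batches:
                return self._full_batches.pop(0)
            out, self._current = self._current, []
            return out
```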
```python
def _export(self) -> SpanExportResult:
    batch = self._accumulator.batch()
    if len(batch) > 0:
        out = self._exporter.export(batch)
```
Not sure how we'd do it, but this still has the issue that there is no way to interrupt the exporter if it is in a backoff loop.
Yeah, we should figure out how we want this to behave. I'm thinking the sleep during backoff should be interruptible (which can be done with an Event object), but the actual sending should not be (within some reasonable timeout).
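A sketch of that split, using `threading.Event.wait(timeout)` as the interruptible backoff sleep (the class and method names here are hypothetical, and `send` stands in for whatever performs the actual export):

```python
import threading


class InterruptibleBackoff:
    """Sketch: sleep between retries on an Event so shutdown can cut the
    backoff short immediately, while the send itself runs to completion
    (bounded only by its own timeout)."""

    def __init__(self):
        self._shutdown = threading.Event()

    def export_with_retry(self, send, batch, max_retries: int = 5) -> bool:
        delay = 1.0
        for _ in range(max_retries):
            if send(batch):  # the send itself is not interrupted
                return True
            # wait() returns True if shutdown() was called, ending the
            # backoff early instead of sleeping out the full delay.
            if self._shutdown.wait(timeout=delay):
                return False
            delay *= 2  # exponential backoff between attempts
        return False

    def shutdown(self) -> None:
        self._shutdown.set()
```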
Force-pushed from 9e8306a to f54bdd8.
Spring cleaning. Hope to pick this up again later.
While looking through the BatchSpanProcessor and OtlpSpanExporter, as well as implementing JSON over HTTP, I was curious to see what these classes might look like if we re-thought how they were designed with a view towards making them smaller and easier to understand.
To get feedback from this SIG about whether this direction looks promising, I'm creating a draft PR to discuss.